Search Results: "Bernhard R. Link"

6 January 2007

Bernhard R. Link: clean vs. crowded bug pages

Marc Brockschmidt wrote that the BTS is too crowded and Joey Hess objected that an overly clean BTS can also be a bad sign.

I think both are true, or better said: neither view makes sense without the other:

Bug reports are in my eyes one of the most valuable resources we have. No one can test everything, even in almost trivial packages. To achieve quality we need the users' input, and a badly worded bug report is still better than no bug report at all. Our BTS is a very successful tool in that respect, as it lowers the barrier to report issues. No hassle to create (and wait for completion of) an account, no regrets from getting funny unparsable mails about some developer changing their e-mail address (did I already say I hate bugzilla?).

As those reports are valuable information, one should keep them as long as they can be useful. Looking at the description of the wontfix tag shows that even a request that cannot or should not be fixed in the current context is considered valuable. Most programs and designs change, so having a place to document workarounds and to keep in memory what open problems exist matters.

On the other hand a crowded bug list is like a fridge you only put food into. Over time it will start to degrade into the most displeasing form of a compost heap. The same holds for bug reports:

Most bugs are easier when they are young: you most probably still have the same version as the submitter somewhere, you know what changed recently, and when you can reproduce the problem you get some hints on what is happening and can get at it. If you cannot reproduce it, the submitter might still be reachable for more information.

When the report is old, things get harder. Is the bug still present? Was it fixed in the meantime by some upstream release? Is the submitter still reachable, and do they still remember what happened?

When I care enough about a problem to write a bug report and try to supply a patch for it, I always try to take a look at the bug list and look for some other low-hanging fruit to pick, to submit another patch, too. (After all, most of the time is spent trying to understand the package and the strange build system upstream chose instead of plain old autotools, not on fixing the problem.) But when it is hard to see the trees because of all the dead wood around them, when there is nothing to be found with some way to reproduce it, and when one knows far too well that the most efficient step would be a tedious search through old versions to see whether that bug was solved upstream many years ago, good intentions tend to melt like ice thrown into lava.

So, when I wrote that both are true, I meant that keeping real-world issues documented and visible is a good thing. But having bugs rot (and often they do) will pervert all the advantages. In the worst case, people will even stop submitting new reports because it takes too long to look through all the old ones for a duplicate.

3 November 2006

Bernhard R. Link: again compiler arguments

I know I repeat myself, but given the current discussion, I simply feel the need to do so:

Please do not hide the arguments given to the compiler from me.

I cannot fix what I do not know is wrong. Maybe you can.

Keep the argument list tidy.

Many argument lists are longer than necessary. If there is some -I/<whatever> in the argument list on a Debian system, there is something fishy. (It's not the universal collection of different stuff all going wherever it wants, after all.) Common cases are:

- buggy scripts to add -I/usr/include
- packages working around upstreams breaking compatibility
- plainly broken upstreams
- oversight

In short: if the line is too long, that is normally a bug causing more pain than just a long line. Do something against those bugs, please. There is no need at all for a properly made library to require -I for installed stuff. It's installed in the system; the default search path for /usr/include should suffice. It often does not, but that is simply bad design of that library's interface. Do something against that, please! Also, for stuff not installed, why do you need more than one -I? Are you embedding other libraries into your code? Why are they libraries if no one else uses them? If someone else uses them, why are they not made into proper library packages? If it is all internal stuff, why does it need so many include parts instead of just a single include/ dir? And if it only needs one include dir, why is it added a dozen times? Why do you need -D for anything but paths? Ever heard of AM_CONFIG_HEADER?
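A small illustration of that point (my own hypothetical example with a made-up library "foo", not from the original post):

     # headers of an installed library live under /usr/include/foo/,
     # so '#include <foo/foo.h>' is found without any -I flag at all:
     cc -Wall -g -O2 -c bar.c
     # a single -Iinclude is enough for a project's own, not yet installed headers:
     cc -Wall -g -O2 -Iinclude -c bar.c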

And yes, I know many modern libraries are written by people who never looked at anything but Windows when designing their headers. (Some even seem to have never looked at unixoid systems even after using them for decades.) That is a problem, not something to be worked around with even more kludges. Kludges working around kludges are there to stay. So do not add them.

26 October 2006

Bernhard R. Link: "a speech for policy"

If you have to name a single thing that singles out Debian among all the other distributions in practical quality, then you cannot come up with anything but Debian having a policy that packages have to follow.

The little things make something feel raw or polished. Things that look too unimportant by themselves gain real importance in their sheer number.

As with all rules, rulesets can become too large and become an obstacle. This can be avoided by being conservative and minimal in those rules, which Debian has always practised to the extreme.

Limiting this further down to things people deem "important" will only further reduce the overall quality. Instead of removing the few things that are in the policy, we should rather extend it so that everything not meeting current policy is a bug (which can still be tagged wontfix or help), rather than reducing the rules found in policy or making more things non-binding.

2 October 2006

Bernhard R. Link: "current GR and release of etch"

I doubt the current vote can delay etch if accepted. There are many different GR suggestions out there to get additional exceptions for etch. And there is no doubt at all that such exceptions will be accepted with a gigantic majority.

I see more danger of delaying etch if the GR is not accepted but voted down. Then people will have much less ground on which to ask for exceptions and far less common ground on which to base all the GRs that are to come. And given the large number of proposals on debian-vote, having many more GRs will not help to get etch out.

Also note that if the GR is not accepted, there are many people who believe the current rules still apply, and those rules are: source is needed for all bits in the Debian distribution, and much more has to be ripped out than in those mysterious 6 months if no additional exceptions are voted on.

20 September 2006

Bernhard R. Link: Trademarks

If anyone thought that accepting bogus obligations just to be allowed to call something by its name was harmless, take a look at Eric Dorland's blog or directly at the new problems.

My vote for this: call it firesomething or mffbrowser or some other free name once and for all. With some luck somebody will then also write a nice patch to get a common Debian ca-certificate handling. (I'm sick of having to do anything twice, especially if it includes writing mozilla extensions to add a ca-certificate every time a user loads their config, as I'm too ignorant in all this stuff to know any better way.) Having things as similar as possible in different environments is a nice goal, but having working solutions and the right to implement working solutions is much more important...

10 August 2006

Bernhard R. Link: Graphic Libraries

Wouter Verhelst asked why simple games are so slow nowadays.

I think the problems are in the libs. All this modern stuff tries to become more and more modern, and to get more and more out of all those new render extensions, direct graphics and hardware accelerations. There simply is no way to decide which way is faster, so libraries have to guess. So it is no surprise things go wrong. And the places they go wrong are of course not the fast computers, but the older machines that do not have those nifty accelerations and have no fast CPU to cancel it out.

Another disadvantage is all those "portable" libraries. SDL for example needs three connections to the X server before it does anything. Three times establishing a connection, checking security cookies, and so on. Its API looks like it grew up on Windows, or was never intended to be used for anything other than full-screen mode. (You want to find out how large the window is? Why should you be able to, when you said the window cannot be resized?)

QT likes to use extensions, too. I don't know whether it is its fault or the newer X servers', but the newer your installation gets, the slower 2D games using QT can become. (Note the "can": if you have the right graphics card, lucky you; if you have the wrong one, bad luck.) To be fair, QT is not meant or designed for 2D games. On the other hand I don't know what it is supposed to do other than being a C++ compiler benchmark measured in hours.

GTK was such a promising design: object orientated (widget classes are one of those very few cases where object orientation can be used with more advantages than disadvantages) but still plain C, small, and looking like it was designed for X. To be fair, I do not even know how well it performs, as the ever increasing library cancer drives me away: from the "users should not be able to change their homedir, that would be far too much the Unix way" glib, to this whole myriad of different little libraries, all moving all the time, spewing their headers into so many different directories that a compiler invocation folds three times around your terminal.

Well, enough ranting. My next graphical program will use Athena Widgets. I only have to hope all this reanimated X development lately will not pull xlib away from under our feet in the future...

17 May 2006

Bernhard R. Link: When things suddenly go very fast

or in other words:
     grep -s -q 'dn\.regexp' /etc/ldap/slapd.conf && cat <<EOF
     Ha ha, sucker! Ever asked yourself why your ldap database is so fsck'ing
     slow despite all the caches and indices you added?
     EOF
     

11 April 2006

Bernhard R. Link: only DDs should be allowed to upload packages

Anthony Towns writes:

"Interestingly, the obvious way to solve the second and third problems is also to do away with sponsorship, but in a different sense - namely by letting the packager upload directly. Of course, that's unacceptable per se, since we rely on our uploaders to ensure the quality of their packages, so we need some way of differentiating people we can trust to know what they're doing, from people we can't trust or who don't yet know what they're doing."

I think the whole point of NM is to make sure we can trust people. This will be extremely different from sponsorship, as I hope no sponsor takes a package and just uploads it, but rather makes sure it is as correct as any of his or her own packages, using all his or her experience.

Even some little game or package for special use can cause severe headaches, as the maintainer scripts can delete stuff outside that package or open security holes. Things having that much power should only be in the hands of people we actually know and trust. Thus some DD should be responsible. And I doubt that there are enough DDs wanting to be responsible for something another person does if they hand out a blanket upload privilege for some package without any chance to look at what gets uploaded.

That said, I like the idea of making sure the Maintainer in the .changes file and the owner of the key that signed it are the same. (It's nicer to change it so that you get the mails yourself and bounce them to the person you are sponsoring, but I sometimes forget to.) Does the field have any meaning yet other than determining who gets the mails from the queue daemons and dak?

26 March 2006

Bernhard R. Link: compiler arguments

Please do not hide the arguments given to the compiler from me.

It's hard to realize something is going wrong if you do not see what is happening. If the argument list is too long, do something against that instead of hiding it.

Make sure you follow policy when packaging software

Debian packages should be compiled with -Wall -g, but more and more are not. Please check that yours are, but check in the correct place: do not look into the debian/rules file, but into the build log. If the Makefile sets a default with a single equals sign ("="), running 'CFLAGS="-Wall -g -O2" make' will not suffice. Try 'make CFLAGS="-Wall -g -O2"' instead. (Actually, there is no good reason to put them before the command. Always try to put things as arguments first, both with make and with ./configure.)
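A minimal sketch (an invented Makefile, not from the post) of why the position of the assignment matters:

     # the Makefile contains a default set with '=':
     #     CFLAGS = -O2
     CFLAGS="-Wall -g -O2" make    # environment value is overridden, build uses -O2 only
     make CFLAGS="-Wall -g -O2"    # command-line value overrides the Makefile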

It really makes everyone's life easier if those options are set.

Keep the argument list tidy.

Many argument lists are longer than necessary. If there is some -I/<whatever> in the argument list on a Debian system, there is something fishy. (It's not the universal collection of different stuff all going wherever it wants, after all.) Common cases are:

- buggy scripts to add -I/usr/include

Better fix those scripts. Also make sure they do not cause other problems, like linking your program against libraries your program does not use directly. (Possibly causing funny segfaults when those libs link against other versions of those libraries)

- -I/usr/X11R6/include

For upstream packages this might perhaps be useful to support older operating systems and people unable to add it to CFLAGS themselves. But on FHS systems this is not needed at all, as the FHS mandates this handy /usr/include/X11 -> /usr/X11R6/include/X11 symlink. And newer X puts the headers directly in the correct place.

- packages working around upstreams breaking compatibility

Life would be too easy if upstreams never broke APIs. But if they make a new incompatible version, and even change the library name for it, would it have been that difficult to also change what programs written or ported for that new incompatible API have to put in their #include line?

- plainly broken upstreams

Putting stuff in $PREFIX/include/subdir/ and #include'ing other files from that subdirectory without the subdir prefix deserves the application of some large LART.

- oversight

Often it is just not necessary, and everything gets much more readable and easier if it is left out.

Other things making argument lists unreadable are large amounts of -Ds generated by ./configure. AM_CONFIG_HEADER can help a lot here with the non-path stuff. Stuff containing paths is surprisingly often not used at all.
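A before/after sketch (hypothetical compile lines, not taken from any real package): with AM_CONFIG_HEADER (spelled AC_CONFIG_HEADERS in newer autoconf) the configure results move from the command line into a generated config.h that the sources #include.

     # without a config header, every configure test result lands on the compile line:
     cc -DPACKAGE=\"foo\" -DVERSION=\"1.2\" -DHAVE_UNISTD_H=1 -DHAVE_MMAP=1 -c foo.c
     # with AM_CONFIG_HEADER([config.h]) those defines live in config.h instead:
     cc -Wall -g -O2 -c foo.c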

27 February 2006

Bernhard R. Link: Gnu FDL

My suggestion for the GFDL vote is 1342

( 1 ) Choice 1: "GFDL-licensed works are unsuitable for main in all cases"

Of course that only means documents available solely under the FDL, or only under the FDL and other non-free licenses. Documents also available under BSD, GPL or whatever are still free. That "in all cases" means without looking at the loudness of the proponents of some document.

( 3 ) Choice 2: "GFDL-licensed works without unmodifiable sections are free"

This does not mean "without unmodifiable sections", it means "without additional unmodifiable sections": the FDL always requires including the license text within the work. (I still do not know how to include the license within a binary easily. But as the FDL is GPL-incompatible anyway, it perhaps makes such work-flows impossible anyway.)

( 4 ) Choice 3: "GFDL-licensed works are compatible with the DFSG [needs 3:1]"

That's even worse. We have non-free for the non-free stuff some of our users might not be able to live without. (Or think so.) Foisting non-free stuff on them will severely hurt them in the long run.

( 2 ) Choice 4: Further discussion

Don't forget this option. If you do not like choice 2 (perhaps because you think, like me, that it is almost choice 3), rank 4 above it. Otherwise, with equally many [3214] and [1234] votes, choice 2 would most likely win.

So only rank 2 above 4 if you want to see 2 in action. Otherwise vote 4 over 2. (The same holds for 3 and 4, but 3 does not look as innocent as 2.)

16 January 2006

Bernhard R. Link: Silver Plate

I just feel like quoting some passage from the Debian Developer's Reference:

A big part of your job as Debian maintainer will be to stay in contact with the upstream developers. Debian users will sometimes report bugs that are not specific to Debian to our bug tracking system. You have to forward these bug reports to the upstream developers so that they can be fixed in a future upstream release.

While it's not your job to fix non-Debian specific bugs, you may freely do so if you're able. When you make such fixes, be sure to pass them on to the upstream maintainers as well. Debian users and developers will sometimes submit patches to fix upstream bugs -- you should evaluate and forward these patches upstream.

(that's from section 3.5, in case anyone wants to look it up there)

16 December 2005

Bernhard R. Link: Why not CVS?

To Wouter: No, I never used anything other than CVS for anything serious. Whenever I tried one of the others for something (mostly because someone else used it for something I wanted to work on), they simply broke. I don't want to debug my tools or use funny workarounds, but to get some work done on what I use the tools for. Using anything not in a Debian stable release is hardly acceptable for me (remember, they are tools), and when even the testing or unstable versions are not enough for simple tasks, it's just too bleeding edge for me.

"only suggests you haven't seen many large projects in the heat of code change"

That's simply a matter of style. If a checkin means a full compile, manually reading the diff, a minimal check for correctness, writing Changelog entries and possibly adapting the documentation, there is simply no need to handle checkins with sub-minute resolution.

"Far too often have I seen people afraid to reorganize their code because that would lose history on the files."

That's a major problem, but the problem is the fear. No rcs will ever be able to track history across even the most common reorganisations of code. Limiting yourself to what your rcs can cope with is the main problem; the abilities of your rcs are a minor one.

"How about the fact that upstream CVS development is rather extremely dead, [...]"

I prefer tools that can already do what I need over tools that will normally be adapted to my needs eventually. Active development means that when I encounter a bug I either have to wait a year until it no longer bothers me, or wait a week and update the software on every computer I want to use it on, possibly locally in my user account if I do not administrate the computer or if the behaviour changes so much that other usages break. Leading to problems staying within my disk quota and so on.

Don't understand me wrong, I'm not against SVN. I guess by now (several years after everyone was already told not to use that old-fashioned CVS, but also not SVN version N but version N+1, because N was too broken; for several values of N) it is quite usable. And things like atomic commits might even make it favourable over CVS for larger projects. But not every project can be within the top ten list of size, and coding and commit styles differ. I believe that for many people, the ratio of advantages to disadvantages still points in another direction.

13 December 2005

Bernhard R. Link: Why not CVS?

With this rcs debate currently on planet.debian.org, I felt the need to add some thoughts.

My point is mainly: Why not simply stick to plain old CVS?

The pros are easily collected: it's installed everywhere, almost everyone knows at least the basic commands, and it is rock solid technology without all those little nasty bugs the newer ones have all the time.

Most of the contra arguments do not apply to me, so how can they to anybody else? ;-)

Like changelog messages: I write a Changelog entry after a patch, because I look at the patch while doing so. After all, that is what the Changelog is supposed to document, not what I thought I did. (And looking at the patch is always a good way to catch some obvious mistakes one made.)

Making multiple patches off other people's projects: two copies of the directory you are working on are all that is needed. Change the one, make diffs against the other. Revert the diff (patch -R, or just answer the prompts often enough), change the diff to what it should be, reapply it to make sure it still works, test it, revert it again. To make another patch for the same original software, continue from the beginning; otherwise apply the patch to both copies. It just works. Easier than darcs and co., even if those would not core dump, go into endless loops or play dead dog.
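As a concrete sketch of that workflow (the directory and patch names are of course invented):

     cp -r foo-1.0.orig foo-1.0               # keep a pristine copy next to the working one
     # ... edit files in foo-1.0/ ...
     diff -ru foo-1.0.orig foo-1.0 > my.patch
     patch -d foo-1.0 -p1 -R < my.patch       # revert, then polish the patch by hand
     patch -d foo-1.0 -p1 < my.patch          # reapply and test that it still works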

Even the non-exotic new systems still have plenty of features I never needed:

Something has to be really big before moving files around is needed at all. And if it is needed, just delete the file here and add it there. That loses a bit of history, but the history is still found in the old place. Moving whole files is only a special case of moving routines between files during refactoring; one sometimes just has to look somewhere else.

Even for svn's global revision numbers I have not yet found a use. Being used to cvsish tagging removes the need if one thinks ahead, and between commits there is normally at least a quarter of an hour, so date-based indexing always works.
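For the record, the CVS-style equivalents I am thinking of (tag and date are of course made up):

     cvs tag before-big-rewrite               # name the state you may want back later
     cvs update -r before-big-rewrite         # check out exactly that state again
     cvs update -D "2005-12-01 12:00"         # or simply go by date between commits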

So, what are we talking about?

11 November 2005

Bernhard R. Link: Would you have seen the bug

... if you had not been told it is in there:

     ssize_t wasread = read(fs,buffer,toread);
     if( read > 0 )  
     
